- ___________________ Subj: Computer Opponents / Game A.I. ____________________
-
- Fm: andrew hayes 72330,3215 # 182217
- To: All Date: 28-Jun-92 17:42:51
-
- I am currently working on a strategy game based on the famous Tri-Tactics
- game (or, similarly, the game Land Sea Air). Simply put, you have a board
- split up into Land or Sea spaces, and you have several different military
- pieces, each slightly stronger than the next. The board is split into a
- 12x12 grid. Each piece can move only forward, back, left or right. When one
- piece moves beside another piece, it has the option to declare an ATTACK. In
- that case, depending on the pieces involved, one, both, or none of them are
- removed from the board.
-
- For example, the ARMY will beat every different type of infantry piece,
- and the only thing that removes the Army is the Heavy Arty. The object
- is to capture the opponent's Land H.Q. with an infantry piece or his Naval
- Base with a Ship.
-
- Winners in battles between pieces are not always as clearly defined as
- "A is stronger, so A wins". In some cases, if both pieces are on land, A
- wins; if both pieces are in the sea, B wins; and if one is on land and one
- is at sea, neither wins.
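-
- As a rough illustration of the terrain-dependent resolution described above
- (a sketch only -- the enum names and the single sample pairing are invented,
- not the game's actual routines), a lookup in C++ might look like this:
-
- enum Terrain { LAND, SEA };
- enum Outcome { ATTACKER_WINS, DEFENDER_WINS, BOTH_REMOVED, NO_EFFECT };
-
- // Hypothetical pairing: piece A beats piece B on land, B beats A at sea,
- // and a mixed land/sea engagement removes neither piece.  A full game
- // would fill a table[pieceType][pieceType] with entries like this one.
- Outcome resolveExample(Terrain attTerrain, Terrain defTerrain)
- {
-     if (attTerrain == LAND && defTerrain == LAND) return ATTACKER_WINS;
-     if (attTerrain == SEA  && defTerrain == SEA)  return DEFENDER_WINS;
-     return NO_EFFECT;
- }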
-
- Initial placement starts with both players placing their pieces any way
- they wish, making sure to stay on their own side of the board (the 5 rows
- nearest the bottom of their side). This placement scheme leaves 2 rows in
- the middle of the board; this space is NEUTRAL. (Ships CAN go on land, and
- infantry CAN go in the sea, but if either is spotted by an attacking enemy,
- it is automatically removed.)
-
- I have already written all the routines to handle the human player's
- side of the game (i.e. piece placement, movement, and checking if one piece
- will beat another piece). However, I am stuck on perhaps the hardest part of
- the game: the computer player's AI. As it stands, the computer places its
- pieces pseudo-randomly, attempting to place SHIPS on the sea and all other
- pieces on LAND.
-
- As you can see, placement is not very intelligent. I need a
- simple system that will place pieces according to weighted positions on the
- board, and will change these weighted positions over time as it attempts to
- improve its initial placement strategy.
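-
- One possible shape for such a scheme (a minimal sketch only -- the weight
- values, update rule, and names below are assumptions, not code from the
- game): keep a 12x12 grid of weights per piece type, place each piece on the
- highest-weighted free square in the home rows, and nudge the weights after
- each game depending on how the piece fared.
-
- const int SIZE = 12;
- const int HOME_ROWS = 5;                  // rows a player may set up in
-
- struct PlacementWeights {
-     double w[SIZE][SIZE];                 // one weight per board square
- };
-
- // Pick the highest-weighted free square in the owner's home rows.
- // (Adding a little random noise here keeps placements from repeating.)
- void bestSquare(const PlacementWeights &pw, bool occupied[SIZE][SIZE],
-                 int &outRow, int &outCol)
- {
-     double best = -1e30;
-     outRow = SIZE - HOME_ROWS;
-     outCol = 0;
-     for (int r = SIZE - HOME_ROWS; r < SIZE; ++r)
-         for (int c = 0; c < SIZE; ++c)
-             if (!occupied[r][c] && pw.w[r][c] > best) {
-                 best = pw.w[r][c];
-                 outRow = r;
-                 outCol = c;
-             }
- }
-
- // After a game, reinforce squares that worked out and penalize the rest,
- // so the placement strategy drifts toward what has been surviving.
- void updateWeights(PlacementWeights &pw, int row, int col, bool survived)
- {
-     pw.w[row][col] += survived ? 0.1 : -0.1;
- }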
-
- Another problem is the computer's reactions to your moves. As it
- stands, the computer just sits there and does nothing. I need a routine that
- is more than just reactionary (i.e. more than moving a piece away when a
- nearby enemy piece is stronger, and attacking when it is weaker). Well, you
- may be able to tell I am no game programming expert; this is a project I
- decided to undertake to learn more about Borland C++ 3.1. I hope some people
- here can give me a few hints.
-
- Andrew Hayes
- ...........................................................................
-
- Fm: Rick 72377,1274 # 182232
- To: andrew hayes 72330,3215 (X) Date: 28-Jun-92 17:58:10
-
- The type of game you describe (a zero-sum, semi-deterministic game with
- weighted pieces) is similar to quite a few games, your description being
- closest to Stratego. Perhaps you could try to contact the programmer who did
- that one, and adapt his methods to your game.
-
- I have done mathematical analysis on similar games (and will probably start
- researching the effects of weighted pieces sooner or later); can you send me
- a copy of the rules and enough information so that I can make it into a board
- game? (You'd get full copyright.)
-
- With a "board game" mockup, I could get an idea for the game, and give you a
- few ideas on how to make enough algorithms for at least a credible AI.
-
- Rick
- ...........................................................................
-
- Fm: Jesse Montrose 76646,3302 # 182672
- To: Rick 72377,1274 Date: 29-Jun-92 21:52:21
-
- Too bad about Stratego, I loved the board game, but I beat the computer every
- time in the Accolade version. Andrew, I hope you end up with something
- better than the Stratego AI!
-
- A couple suggestions: If you want to get really hardcore, you might drop in
- on the Neural Network forum in AIEXPERT, there's a lot of good talent there.
-
- The basic process in developing "standard" AI is playing the game yourself
- and picking apart why you do the things you do. That sounds difficult, but
- it's really not that bad. Assign weights to certain patterns, importance to
- general objectives. This is where a Neural Net can be invaluable, in
- recognizing those patterns later. I've been playing with them for a while
- now, and there's much that can be done with them.
-
- An excellent book is "Neural Networks: Algorithms, Applications, and
- Programming Techniques" by Freeman and Skapura. Jim Freeman frequents the NN
- forum in AIEXPERT, so you might drop him a line there.
- ...........................................................................
-
- Fm: KGliner 70363,3672 # 182431
- To: andrew hayes 72330,3215 Date: 29-Jun-92 04:41:57
-
- You might be able to get away with some simple search algorithms that only
- go a few levels deep in the tree. For example, use a selection process based
- on having the pieces move towards isolated pieces, or pieces that are weaker
- and unlikely to get support from stronger units in time.
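-
- For instance, a depth-limited search over an abstract game interface (the
- GameState members here are placeholders you would supply for your own game,
- not an existing API) can be this small:
-
- #include <vector>
-
- struct GameState {
-     std::vector<int> legalMoves() const;   // you supply these three for
-     GameState apply(int move) const;       // your own game; moves can be
-     double evaluate() const;               // encoded however you like
- };
-
- // Depth-limited negamax: look a couple of plies ahead, then fall back on
- // the static evaluation.  evaluate() is assumed to score the position for
- // the side to move; it is where heuristics like "reward closing on
- // isolated or unsupported enemy pieces" would live.
- double search(const GameState &s, int depth)
- {
-     if (depth == 0) return s.evaluate();
-     std::vector<int> moves = s.legalMoves();
-     if (moves.empty()) return s.evaluate();
-     double best = -1e30;
-     for (int i = 0; i < (int)moves.size(); ++i) {
-         double score = -search(s.apply(moves[i]), depth - 1);
-         if (score > best) best = score;
-     }
-     return best;
- }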
-
- You could also hardwire some basic strategies into it. Have two human
- players try the game, and watch how they play. Then adapt their strategies
- for your computer AI. This can work for both short and long term goals in
- the game.
-
- That was a somewhat vague response, but I don't know how much you already
- know about this stuff.
-
- As for libraries, I'm somewhat partial to the Genus Microprogramming
- package. The event mgr routines it comes with are a little tough to get
- working right, but the library as a whole is unbeatable. I've been working
- with it for about three months now, and am more than pleased. It is a little
- pricey though ($600 or so, but no royalty).
-
- Kevin
- ...........................................................................
-
- Fm: Mark Betz/GD SL 76605,2346 # 207491
- To: Jay Shaffstall 76003,1072 (X) Date: 01-Sep-92 22:09:44
-
- Hi, Jay. Welcome. Jesse and Peter have given you the same advice I'd give. I
- don't think there has been much written on the subject, and unfortunately we
- don't have anything on it in the lib here. It's one of those arcane areas. I
- have a book, now out of print, that describes the use of various data
- structures to implement heuristic searches, the idea being that any move
- your computer opponent makes can be viewed as taking the optimal step
- through the tree in order to reach a goal state from a current state. For
- example, the goal state of a computer fighter pilot is to maneuver into
- position for a shot. To get to that state from the current state you may
- employ basic fighter flight tactics, and the problem of "what to do next"
- becomes one of "what is the next maneuver that moves me closer to the goal
- state". The states and maneuvers can be represented in terms of a data
- structure, and there are algorithms for finding the optimal path. That's
- about all the help I can be, but you will probably find a good explanation
- in any text on the subject of goal-state traversal of data structures.
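-
- A bare-bones version of that idea (the little state graph and the heuristic
- numbers below are invented purely for illustration): expand whichever open
- state the heuristic says is closest to the goal, and read the maneuver
- sequence back out of the search record.
-
- #include <cstdio>
- #include <queue>
- #include <utility>
- #include <vector>
-
- int main()
- {
-     const int N = 5, START = 0, GOAL = 4;   // toy graph of "situations"
-     std::vector<int> next[N];               // edges = available maneuvers
-     next[0].push_back(1); next[0].push_back(2);
-     next[1].push_back(3); next[2].push_back(3);
-     next[3].push_back(4);
-     double h[N] = { 4, 2, 3, 1, 0 };        // guessed distance to the goal
-
-     // Best-first search: the open state with the lowest estimate pops
-     // first (stored negated because priority_queue pops the largest).
-     std::priority_queue<std::pair<double, int> > open;
-     std::vector<int> cameFrom(N, -1);
-     std::vector<bool> seen(N, false);
-     open.push(std::make_pair(-h[START], START));
-     seen[START] = true;
-
-     while (!open.empty()) {
-         int s = open.top().second; open.pop();
-         if (s == GOAL) break;               // reached the goal state
-         for (int i = 0; i < (int)next[s].size(); ++i) {
-             int n = next[s][i];
-             if (!seen[n]) {
-                 seen[n] = true;
-                 cameFrom[n] = s;
-                 open.push(std::make_pair(-h[n], n));
-             }
-         }
-     }
-     // Walk back from the goal to recover the chosen chain of maneuvers.
-     for (int s = GOAL; s != -1; s = cameFrom[s]) printf("state %d\n", s);
-     return 0;
- }
-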
- --Mark
- ...........................................................................
-
- Fm: yngvi 76703,3046 # 207729
- To: Mark Betz/GD SL 76605,2346 (X) Date: 02-Sep-92 13:32:20
-
- Right, books on data structures are some of the best sources -- also,
- decision analysis and other books that treat alpha-beta pruning, etc. The
- bigger problem, though, is in defining the game you're working on, designing
- the trees to traverse, and assigning values. If you can find it, there's an
- excellent book called Games Programming by Eric Solomon (Cambridge Univ
- Press, 1984) that's a great introduction.
-
- One of the best AIs turns out to be pure random numbers -- I usually have
- several levels of computer 'intelligence' in my games, and the lowest level
- is often just random choices by the computer. It's amazing how many people
- think the computer is consciously deciding to attack them, when in fact it's
- just moving along slightly constrained random patterns <g>.
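-
- A sketch of what 'slightly constrained' random movement can look like (the
- constraint used here, never stepping farther from a chosen target square,
- is just one example):
-
- #include <cstdlib>
- #include <vector>
-
- struct Move { int dRow, dCol; };
-
- // Pick a random orthogonal step, but discard candidates that violate a
- // cheap constraint -- here, increasing the distance to a target square.
- Move constrainedRandomMove(int row, int col, int targetRow, int targetCol)
- {
-     static const Move all[4] = { {1,0}, {-1,0}, {0,1}, {0,-1} };
-     int curDist = std::abs(targetRow - row) + std::abs(targetCol - col);
-     std::vector<Move> ok;
-     for (int i = 0; i < 4; ++i) {
-         int d = std::abs(targetRow - (row + all[i].dRow)) +
-                 std::abs(targetCol - (col + all[i].dCol));
-         if (d <= curDist) ok.push_back(all[i]);
-     }
-     if (ok.empty()) return all[rand() % 4];   // nothing passed the filter
-     return ok[rand() % ok.size()];
- }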
-
- In the online Sniper game, e.g., the computer guys have pretty limited
- knowledge of the game itself -- they get up, look around, and then either
- move or fire or throw a grenade. All that can be done pretty quickly (on CIS
- it HAS to be simple <g>), but it does make a reasonable opponent.
-
- S
- ...........................................................................
-
- Fm: yngvi 76703,3046 # 208110
- To: Mark Betz/GD SL 76605,2346 (X) Date: 03-Sep-92 12:46:39
-
- There are a fair number of books that can help in designing AI -- starting
- with Perla's The Art of Wargaming, which concentrates on boardgames but has
- useful chapters on design. Then there are books like Tuckwell's Elementary
- Applications of Probability Theory, or Ayala's Population & Evolutionary
- Genetics.
-
- The key thing to remember is that you probably won't find what you want in
- the computer section of the bookstore -- try the economics or science or
- applied math sections, depending on your application.
- ...........................................................................
-
- Fm: yngvi 76703,3046 # 207728
- To: Jay Shaffstall 76003,1072 (X) Date: 02-Sep-92 13:32:09
-
- Most game AI is rudimentary at best -- a lot depends on the game itself:
- is it a game of complete information? Are there random events? In chess or
- checkers, you know all the pieces and their positions, moves are constrained
- to very specific patterns, and combat results are completely known. In a war
- game, you don't know where all the pieces are; you know only probabilities
- of movement & combat (if that). In bridge, you have partial information in a
- fixed (52 card) universe, so you can estimate probabilities. Each of these 3
- types of games requires a different approach.
-
- The AI community hasn't really done much in areas of incomplete information
- -- even in chess, modified brute force has proven to be the 'best' algorithm.
- ...........................................................................
-
- Fm: Jesse Montrose 76646,3302 # 208258
- To: yngvi 76703,3046 Date: 03-Sep-92 19:58:22
-
- My current game is a tank game. I'm trying to use genetic algorithms to
- 'grow' viable computer tanks.
-
- As it stands, my NN modules are geared toward determining the actions of a
- single entity, rather than the 'teamwork' implied in something like a game of
- chess.
-
- It may well be that the translation to team play will not produce any
- results, but there are enough 'free-for-all' games that I'd like to try my
- hand at.
- ...........................................................................
-
- Fm: Mark Iennaco 71740,2675 # 208176
- To: yngvi 76703,3046 Date: 03-Sep-92 17:09:52
-
- While I agree that the application of game theory is limited, I think there
- is some definite value for the computer opponent application.
-
- For those of you who have never been exposed to Game Theory, it is an
- analysis technique for providing the "optimum strategy" (choice pattern) for
- any "two player" situation that can be described by a "payoff matrix".
-
- The payoff matrix is composed of (your play options) x (opponent's play
- options). At the intersection of each option pair is a "payoff" value,
- positive or negative, that determines how well you have done in that
- particular exchange. The result of the analysis is the percentage of times
- you should choose each option at random, including 0% for options that have
- no value.
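-
- For a tiny concrete case (the payoff numbers below are made up): a 2x2
- zero-sum matrix with no saddle point even has a closed-form mixed strategy,
- though anything bigger needs linear programming, and the entries themselves
- usually have to be estimated first, which is where the methods discussed
- below come in.
-
- #include <cstdio>
-
- // Mixed strategy for a 2x2 zero-sum payoff matrix [[a,b],[c,d]] (payoffs
- // for the row player).  This closed form only applies when neither side
- // has a dominating pure strategy (no saddle point).
- int main()
- {
-     double a = 3, b = -1, c = -2, d = 1;         // made-up payoffs
-     double denom = a - b - c + d;
-     double p = (d - c) / denom;                  // P(row player picks row 1)
-     double q = (d - b) / denom;                  // P(col player picks col 1)
-     double value = (a * d - b * c) / denom;      // expected payoff per play
-     printf("play option 1 %.0f%% of the time, game value %.2f\n",
-            100 * p, value);
-     return 0;
- }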
-
- The problem is in calculating the payoff matrix values. Even in a full
- information game, the results of an option might be statistical. For example,
- if I attack a forward weapon, and he closes and attacks, the value of the
- matrix element will be composed of the chance of my damaging his weapon, the
- game value of the weapon, his chance of damaging me, and the game value of
- him damaging me. In an incomplete information game, you might have to factor
- in the chance that the opposing unit is of a particular type. This means that
- determining the individual values may be impossible to do analytically, at
- which point empirical methods (like Monty Carlo) become appropriate.
-
- There are numerous extensions. Examples: non-zero-sum games, which are
- analysed as games in which each player has their own payoff matrix; games in
- which the opponent is known to use "less than optimum" strategies (besides
- the obvious causes, this can occur in a non-zero-sum situation, where your
- opponent uses the optimum strategy for his payoff matrix, which is not the
- optimum strategy against yours); and multi-player games, which are analysed
- as a set of matrices for each pair (combinatorial explosion time).
-
- I hope this has shown that the technique is appropriate for developing a
- strategy for selecting options during play. It is not likely that it can be
- used for a "general purpose" opponent. However, for any specific game, it
- should be possible to run a Monty Carlo simulation on each option pair and
- create a payoff matrix. So for any single game, it could be used for
- strategic decision making.
-
- The net result of a Game Theory analysis on a given game would be a table of
- options, and their appropriate percentages, for each identifiable game
- situation. You would still need to write the code to (1) identify the
- situation (this is a job for an expert system and/or a neural net) and (2)
- implement the options.
-
- TakeItEZ
-
- Mark
- ...........................................................................
-
- Fm: Jesse Montrose 76646,3302 # 208260
- To: Mark Iennaco 71740,2675 Date: 03-Sep-92 19:58:32
-
- You've described a problem similar to my most difficult one to date.
-
- I started out with straight backpropagation neural nets, but here's the
- problem:
-
- My network loses the game and dies. In attempting error propagation, where
- do I assign blame? Was it my early choice to attack? Or my later decision
- to back off to repair?
-
- I've turned to genetic algorithms to fill in the gap. I hope to 'grow'
- viable tanks in a simulated arena, after many generations of mutations and
- mating. Fortunately, I've got a 486/50 coming to let my primordial soup
- simmer... [g]
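-
- A bare-bones sketch of that generational loop (the genome layout, the
- placeholder fitness function, and the mutation/crossover rates below are
- assumptions, not Jesse's actual setup):
-
- #include <cstdlib>
- #include <vector>
- #include <algorithm>
-
- // A "tank" genome is just a vector of weights.  Fitness would come from
- // letting the tank fight in the simulated arena (stubbed out here); each
- // generation keeps the better half and refills the rest by mating and
- // mutating the survivors.
- typedef std::vector<double> Genome;
-
- double randUnit() { return rand() / (double)RAND_MAX; }
-
- double fitness(const Genome &g)           // placeholder: run arena combat
- {
-     double sum = 0;
-     for (int i = 0; i < (int)g.size(); ++i) sum += g[i];
-     return sum;
- }
-
- Genome mate(const Genome &a, const Genome &b)
- {
-     Genome child(a.size());
-     for (int i = 0; i < (int)a.size(); ++i) {
-         child[i] = (randUnit() < 0.5) ? a[i] : b[i];          // crossover
-         if (randUnit() < 0.05) child[i] += randUnit() - 0.5;  // mutation
-     }
-     return child;
- }
-
- bool fitter(const Genome &a, const Genome &b)
- {
-     return fitness(a) > fitness(b);       // note: re-evaluates fitness
- }
-
- void evolve(std::vector<Genome> &pop, int generations)
- {
-     for (int gen = 0; gen < generations; ++gen) {
-         std::sort(pop.begin(), pop.end(), fitter);    // best tanks first
-         int keep = (int)pop.size() / 2;               // survivors
-         if (keep < 1) keep = 1;
-         for (int i = keep; i < (int)pop.size(); ++i)
-             pop[i] = mate(pop[rand() % keep], pop[rand() % keep]);
-     }
- }
-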
- ...........................................................................
-
- Fm: Jesse Montrose 76646,3302 # 208259
- To: Mark Iennaco 71740,2675 Date: 03-Sep-92 19:58:26
-
- >(2) I suspect that an expert system will be more appropriate than a neural
- net (although I can see how to do it either/both ways).
-
- You're quite right. A neural net is difficult to manage without an expert
- system front end. That's the way I'm working now.
- ...........................................................................
-
- Fm: Mark Iennaco 71740,2675 # 208561
- To: Mark Betz/GD SL 76605,2346 (X) Date: 04-Sep-92 15:26:37
-
- O.K. - Monty Carlo Analysis, Short Version:
-
- (1) First, get yourself a _good_ random number generator.
- (2) Then run a large number of trial runs and keep a record of the results.
- (3) Do a statistical analysis on the results.
-
- As that was probably of less-than-crystalline clarity, I will try an example:
- Two matched game units will attack each other until one is destroyed.
-
- each unit has three damage points
- each unit attacks alternately with a 50% chance of doing 1 point damage.
- the attacker gets first shot.
-
- The possible outcomes are
-
- Unit loses
- Unit wins with 1 point left
- Unit wins with 2 points left
- Unit wins with 3 points left (undamaged)
-
- This situation (note: this is destroyer-vs-destroyer in Empire) is difficult
- to resolve analytically (not impossible) because there is the
- chain-of-misses-to-infinity condition that requires theory of limits to
- resolve (remember first semester calculus?).
-
- Instead, set up a program to run about 10,000 combats to completion. Add one
- to the appropriate bin after each run. After all runs divide the number in
- the bin by the number of runs to get the fraction for that outcome.
-
- That is a Monty Carlo Analysis.
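-
- Written out as code, the recipe above (this is just the example as stated --
- 3 damage points each, alternating 50% shots, attacker first -- not anything
- taken from Empire itself) comes to a couple of dozen lines:
-
- #include <cstdio>
- #include <cstdlib>
- #include <ctime>
-
- // Fight 10,000 duels and bin the outcomes.  Bin 0 = attacker loses,
- // bins 1..3 = attacker wins with that many damage points left.
- int main()
- {
-     const int RUNS = 10000;
-     int bins[4] = { 0, 0, 0, 0 };
-     srand((unsigned)time(0));
-
-     for (int run = 0; run < RUNS; ++run) {
-         int attacker = 3, defender = 3;
-         for (;;) {
-             if (rand() % 2) --defender;          // attacker shoots first
-             if (defender == 0) { ++bins[attacker]; break; }
-             if (rand() % 2) --attacker;
-             if (attacker == 0) { ++bins[0]; break; }
-         }
-     }
-     for (int b = 0; b < 4; ++b)
-         printf("outcome %d: %.3f\n", b, bins[b] / (double)RUNS);
-     return 0;
- }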
-
- To use this in a Payoff Matrix for a Game Theory analysis, you would assign
- a value to each outcome, and then compute the weighted sum (value*fraction)
- of all outcomes. The weighted sum becomes the value for the appropriate entry
- in the Payoff Matrix. This particular example becomes the value for all
- entries on the Attack Opponent row, since the opponent's response will be
- immaterial to the outcome. This is not always true.
-
- There are other aspects to the calculation. In this particular game (Empire),
- if this is early in the game, your destroyer may have a higher game value (it
- is good for searching for territory, and chasing transports) than later in
- the game (when everything is colonized). This affects the values of the
- outcomes in the Matrix, especially if they are not matched units.
-
- The presence of other units in the vicinity could also affect the values. A
- heavily damaged destroyer can no longer outrun a cruiser. If an enemy cruiser
- is nearby, the Unit-wins-w/1 outcome has a lower value. OTOH, if one of your
- cruisers is nearby (so that it can protect the damaged unit on its way back
- for repair), that outcome will be worth more.
-
- The dynamics of evaluation-of-the-situation are significantly more complex
- than the choice-of-options once they have been determined.
- ...........................................................................
-
- Fm: yngvi 76703,3046 # 208579
- To: Mark Betz/GD SL 76605,2346 (X) Date: 04-Sep-92 16:48:29
-
- it's Monte Carlo -- as in race track/casino, not holy grail <g>...
-
- Monte Carlo simulation basically involves setting up an equation for your
- system, then picking random values (based on some distribution that you
- decide applies) to plug in and calculate the result. Doing this once doesn't
- do you much good, of course, so you repeat it 100's or 1000's of times, and
- plot a histogram of the results. It's a useful technique for fuzzy problems
- where you don't have tight numbers.
-
- E.g., one real-world example I've used it for is range estimating. In doing
- cost estimating for large construction projects there are many variables that
- you can't control -- inflation, utilities costs, labor charges, etc. Picking
- one value for each doesn't work either, so what you do is construct a
- distribution for each (e.g., "inflation will vary between 3% and 5%"), then
- run your estimate using a set of selections. You repeat this multiple times,
- and you get a histogram of possible total costs. The answer you get isn't a
- single number; instead you can say "we have an 80% chance that the costs
- will be $XXXM or less", etc.
-
- It's a useful technique for simulating an economy in a game.
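-
- A sketch of the range-estimating example in code (all the dollar figures and
- ranges below are invented, and a real estimate would use better-shaped
- distributions than uniform):
-
- #include <algorithm>
- #include <cstdio>
- #include <cstdlib>
- #include <ctime>
-
- // Draw each uncertain input from its range, total the cost, repeat
- // thousands of times, and report a percentile instead of a single figure.
- double uniform(double lo, double hi)
- {
-     return lo + (hi - lo) * (rand() / (double)RAND_MAX);
- }
-
- int main()
- {
-     const int RUNS = 10000;
-     static double totals[RUNS];
-     srand((unsigned)time(0));
-
-     for (int i = 0; i < RUNS; ++i) {
-         double base      = 100.0;                   // $100M baseline
-         double inflation = uniform(0.03, 0.05);     // 3% to 5%
-         double labor     = uniform(20.0, 30.0);     // $20M to $30M
-         double utilities = uniform(5.0, 9.0);       // $5M to $9M
-         totals[i] = base * (1.0 + inflation) + labor + utilities;
-     }
-     // Crude percentile: sort and read off the 80th-percentile entry.
-     std::sort(totals, totals + RUNS);
-     printf("80%% chance the cost is $%.1fM or less\n",
-            totals[(int)(0.8 * RUNS)]);
-     return 0;
- }
-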
- ...........................................................................
-
- Fm: yngvi 76703,3046 # 213696
- To: Jesse Montrose 76646,3302 Date: 16-Sep-92 12:51:09
-
- A couple more things to think about as units start to work together:
-
- One reason why wargames are so hard to develop AI opponents for is team play
- -- selecting the best move for a particular unit is usually straightforward;
- doing so for the entire faction is much more difficult. Just defining the
- 'front' is a problem (Chris Crawford had a short article on that in a past
- JCGD). Assigning tasks across all units is the problem -- you have to assign
- sufficient units to coordinate an attack, but not so many that you get
- traffic jams and clusters of attackers while leaving holes in your line.
-
- Another problem is time related -- consider the case of Jackson's move
- around Hooker at Chancellorsville. A move-by-move algorithm would probably
- have him return to the sound of the guns before completing the maneuver.
- That's what happens in a lot of AI -- you get units bouncing back and forth
- between 2 options.
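-
- One crude but common guard against that kind of oscillation is to make a
- unit commit to its current objective for a few turns, and to require a clear
- margin before it switches -- a sketch only (the margin, the commitment
- length, and the structure are invented; scores are assumed positive):
-
- // Simple hysteresis against option-flipping: stick with the current
- // objective unless the alternative looks clearly better, and never
- // reconsider more often than every few turns.
- struct UnitOrders {
-     int currentObjective;
-     int turnsCommitted;              // turns left before reconsidering
- };
-
- int chooseObjective(UnitOrders &orders, int candidate,
-                     double currentScore, double candidateScore)
- {
-     const double SWITCH_MARGIN = 1.25;   // must be 25% better to switch
-     const int    COMMIT_TURNS  = 4;
-
-     if (orders.turnsCommitted > 0) {
-         --orders.turnsCommitted;         // still committed: carry on
-         return orders.currentObjective;
-     }
-     if (candidateScore > currentScore * SWITCH_MARGIN) {
-         orders.currentObjective = candidate;
-         orders.turnsCommitted   = COMMIT_TURNS;
-     }
-     return orders.currentObjective;
- }
-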
- ...........................................................................
-
- Fm: Roger Keating/SSG 72000,3257 # 308251
- To: Steve Bennett 76702,1071 (X) Date: 04-Mar-93 16:22:21
-
- Steve,
-
- The 'CAW Construction Kit' will be out in about a month. This kit includes
- the AI editor for 'CAW' including tutorial and copious explanations of the
- techniques and applications used in the game. I believe that this is a must
- for anyone interested in AI techniques.
-
- Roger.
- ...........................................................................
-
- Fm: Tim Koffley 70334,16 # 374532
- To: All Date: 12-Jun-93 16:15:07
-
- I've always wanted to do a 2-player game wherein each player controls a
- team of "pieces" combating against the other player on a fixed field, and I
- need the know-how to create a computer opponent. Each player's actions are
- basically move and then attack.
-
- I think what I need are texts on game trees, as the "board" is fixed and
- each player knows where every piece is at all times, but I'm not sure. The
- only books I've found on gaming are pretty mathematical in nature w/o any
- good examples or applications. Does anyone know of any good books with
- thorough applications and examples of such things? Am I barking up the wrong
- tree?
-
- Thanks,
- -tk
- ...........................................................................
-
- Fm: yngvi 76703,3046 # 375163
- To: Tim Koffley 70334,16 (X) Date: 13-Jun-93 14:02:37
-
- Depends on the type of game you want to do. There was a good thread here a
- few months ago about designing AI for wargames (should still be in the
- libs?). Trees may not be the solution you want. In the SNIPER! online game,
- e.g., I have pretty severe constraints in terms of the amount of cpu time I
- can use, so the AI is minimal. But some simple heuristics can help to give a
- fair opponent. In this case, the AI isn't that important, since the main
- reason for playing the game is to play other humans. Despite this, I still
- get a fair amount of feedback claiming the AI is too smart and even
- devious <g>.
-
- The Sniper AI really has only a few rules such as "if fired on, return
- fire". That one rule causes a lot of trouble for the player, since it's
- reactive. Other rules include "fire at a target that you have a reasonable
- chance of hitting", and "if no targets available, move towards the last known
- enemy". These rules take no trees to implement, and little information needs
- to be processed.
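-
- Those three rules, written out as a priority list (this is not the actual
- Sniper code -- the helper names below are placeholders standing in for the
- real game queries):
-
- struct SoldierAI {
-     int  lastKnownEnemyRow, lastKnownEnemyCol;
-     bool wasFiredOnBy(int &row, int &col);     // supplied by the game
-     bool bestTarget(int &row, int &col, double minHitChance);
-     void fireAt(int row, int col);
-     void moveToward(int row, int col);
-
-     void takeTurn()
-     {
-         int r, c;
-         if (wasFiredOnBy(r, c)) {              // rule 1: return fire
-             fireAt(r, c);
-         } else if (bestTarget(r, c, 0.3)) {    // rule 2: decent shot exists
-             fireAt(r, c);
-         } else {                               // rule 3: close on last contact
-             moveToward(lastKnownEnemyRow, lastKnownEnemyCol);
-         }
-     }
- };
-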
- ...........................................................................
-
-